On impact and evaluation in Computational Creativity: A discussion of the Turing Test and an alternative proposal

Authors

  • Alison Pease (School of Informatics, University of Edinburgh, UK)
  • Simon Colton (Department of Computing, Imperial College, London, UK)
Abstract

Computational Creativity is the AI subfield in which we study how to build computational models of creative thought in science and the arts. From an engineering perspective, it is desirable to have concrete measures for assessing the progress made from one version of a program to another, or for comparing and contrasting different software systems for the same creative task. We describe the Turing Test and versions of it which have been used in order to measure progress in Computational Creativity. We show that the versions proposed thus far lack the important aspect of interaction, without which much of the power of the Turing Test is lost. We argue that the Turing Test is largely inappropriate for the purposes of evaluation in Computational Creativity, since it attempts to homogenise creativity into a single (human) style, does not take into account the importance of background and contextual information for a creative act, encourages superficial, uninteresting advances in front-ends, and rewards creativity which adheres to a certain style over that which creates something genuinely novel. We further argue that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable to apply any defensible version of the Turing Test. As an alternative to Turing-style tests, we introduce two descriptive models for evaluating creative software: the FACE model, which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model, which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. While these models require further study and elaboration, we believe that they can be usefully applied to current systems as well as guiding further development of creative systems.

1 The Turing Test and Computational Creativity

The Turing Test (TT), in which a computer and a human are interrogated, with the computer considered intelligent if the human interrogator is unable to distinguish between them, is principally a philosophical construct proposed by Alan Turing as a way of determining whether AI has achieved its goal of simulating intelligence [1]. The TT has provoked much discussion, both historical and contemporary; however, this discussion has principally taken place within the philosophy of AI. Most AI researchers see the TT as a distraction from their goals, encouraging a mere trickery of intelligence and ever more sophisticated natural-language front ends, as opposed to focussing on real problems. Despite the appeal of the (as yet unawarded) Loebner Prize, most subfields of AI have developed and follow their own evaluation criteria and methodologies, which have little to do with the TT.

Computational Creativity (CC) is a subfield of AI in which researchers aim to model creative thought by building programs which can produce ideas and artefacts which are novel, surprising and valuable, either autonomously or in conjunction with humans.
There are three main motivations for the study of Computational Creativity:

  • to provide a computational perspective on human creativity, in order to help us to understand it (cognitive science);
  • to enable machines to be creative, in order to enhance our lives in some way (engineering); and
  • to produce tools which enhance human creativity (aids for creative individuals).

Creativity can be subdivided into everyday problem-solving and the sort of creativity reserved for the truly great, in which a problem is solved or an object created that has a major impact on other people. These are respectively known as “little-c” (mundane) and “big-C” (eminent) creativity [2]. Boden [3] draws a similar distinction in her view of creativity as search within a conceptual space, where “exploratory creativity” searches within the space, and “transformational creativity” involves expanding the space by breaking one or more of its defining characteristics and creating a new conceptual space. Boden sees transformational creativity as more surprising, since, according to the defining rules of the original conceptual space, ideas within the new space could not have been found before.

There are two notions of evaluation in CC: (i) judgements which determine whether an idea or artefact is valuable or not (an essential criterion for creativity) – these judgements may be made internally, by whoever produced the idea, or externally, by someone else – and (ii) judgements to determine whether a system is acting creatively or not. In the following discussion, by evaluation we mean the latter judgement. Finding measures of evaluation of CC is an active area of research, both influenced by, and influencing, practical and theoretical aspects of CC. It is a particularly important area, since such measures suggest ways of defining progress in the field, as well as strongly guiding program design. While tests of creativity in humans are important for our understanding of creativity, they do not usually cause humans to be creative (creativity training programs, which train people to do well at such tests, notwithstanding). Ways in which CC is evaluated, on the other hand, will have a deep influence on the future development of potentially creative programs. Clearly, different modes of evaluation will be appropriate for the different motivations listed above.

Footnote 3: The necessity for good measures of evaluation in CC is somewhat paralleled in the psychology of creativity: “Creativity is becoming a popular topic in educational, economic and political circles throughout the world – whether this popularity is just a passing fad or a lasting change in interest in creativity and innovation will probably depend, in large part, on whether creativity assessment keeps pace with the rest of the field.” [4, p. 64]

The Turing Test is of particular interest to CC for two reasons. Firstly, unlike the general situation in AI, the TT, or variations of it, are currently being used to evaluate candidate programs in CC. Thus, the TT is having a major influence on the development of CC. This influence is usually neither noted nor questioned. Secondly, there are huge philosophical problems with using a test based on imitation to evaluate competence in an area of thought which is based on originality. While there are varying definitions of creativity, the majority consider some interpretation of novelty and utility to be essential criteria.
For instance, one of the commonalities found by Rothenberg in a collection of international perspectives on creativity is that “creativity involves thinking that is aimed at producing ideas or products that are relatively novel” [5, p. 2], and in CC the combination of novelty and usefulness is accepted as key (for instance, see [6] or [3]). In [4], Plucker and Makel list “similar, overlapping and possibly synonymous terms for creativity: imagination, ingenuity, innovation, inspiration, inventiveness, muse, novelty, originality, serendipity, talent and unique”. The term ‘imitation’ is simply antipodal to many of these terms.

In the following sections, we firstly describe and discuss some attempts to evaluate Computational Creativity using the Turing Test or versions of it (§2), concluding that these attempts all omit the important aspect of interaction, and we suggest the sort of direction that a TT for a creative computer art system might follow. We then present a series of arguments that the TT is inappropriate for measuring creativity in computers (or humans) in §3, and suggest that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable and impractical. As an alternative to Turing-style tests, in §4 we introduce two descriptive models for evaluating creative software: the FACE model, which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model, which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. We conclude our discussion in §5.

2 Attempts to evaluate Computational Creativity using the Turing Test or versions of it

There have been several attempts to evaluate Computational Creativity using the Turing Test or versions of it. While these are useful in terms of advancing our understanding of CC, they do not go far enough. In this section we discuss two such advances (§2.1 and §2.2), and two further suggestions on using human creative behaviour as a guide for evaluating Computational Creativity (§2.3). We highlight the importance of interaction in §2.4.

2.1 Discrimination tests

Pearce and Wiggins [7] argue for the need for objective, falsifiable measures of evaluation in cognitive musicology. They propose the ‘discrimination test’, which is analogous to the TT, in which subjects are played segments of both machine-generated and human-generated music and asked to distinguish between them. This might be in a particular style, such as Bach’s music, or might be more general. They also present one of the most considered analyses of whether Turing-style tests such as the framework they propose might be appropriate for evaluating Computational Creativity [7, §7]. While they do not directly refer to Boden’s exploratory creativity [3], instead referring to Boden’s distinction between psychological creativity (P-creativity, concerning ideas which are novel with respect to a particular mind) and historical creativity (H-creativity, concerning ideas which are novel with respect to the whole of human history), they do argue that much creative work is carried out within a particular style.

Footnote 4: Note that these two types of creativity are not analogous to the little-c/big-C distinction, since Boden talks of P-creativity being a subset of H-creativity [3, pp. 32–33].
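As an aside, the ‘objective, falsifiable’ character of a discrimination test is straightforward to pin down statistically. The sketch below is our own illustration, not the analysis reported in [7]: subjects label each segment as human- or machine-composed, and an exact binomial test asks whether their accuracy differs significantly from chance.

```python
from math import comb

def discrimination_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial test: probability of at least `correct`
    right answers in `trials` forced choices if subjects were guessing
    at chance (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 60 segments judged, 41 labelled correctly.
p = discrimination_p_value(41, 60)
print(f"p = {p:.4f}")  # p < 0.05 here: subjects can tell the sources
                       # apart, so the machine fails this test.
```

Failing to reject chance is then a falsifiable sense in which the machine’s output is indistinguishable; note that, as discussed below, Pearce and Wiggins found that significant results can arise from output which is either too strange or too predictable.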
They cite Garnham’s response [8] to Boden’s ideas, in which he emphasises the importance of exploratory as compared to transformational creativity: “the origins of the symphony are lost in history and its major triumphs are the work of composers who did not invent the basic symphonic form.” (Bundy argues along similar lines in [9].) Thus, Pearce and Wiggins suggest that their test rewards an appropriate level of novelty, since they found in their experiments that subjects could identify machine-generated compositions which were either too strange (too far away from well-explored areas) or too predictable (conforming too much to the well-explored areas). In anticipation of the objection that the process by which something has been created is important to judgements of creativity, and thus a behaviour-based test is insufficient, Pearce and Wiggins refer to Hofstadter’s argument that interaction with a system at an arbitrarily deep level can shed great insight into the processes it uses to generate its output [10]. While seeing the evaluation of the creativity of machine composers as an extension of their framework rather than a fully developed aspect, Pearce and Wiggins suggest that this type of evaluation is relevant for musical creativity within a specific style (that is, exploratory creativity). They also suggest that it may generalise to other creative domains such as art or story generation.

2.2 A Turing Test for artistic creativity

In [11], Boden discusses the Turing Test and artistic creativity. She provides an interpretation of the Turing Test which is specifically designed for computer art systems: “I will take it that for an ‘artistic’ program to pass the TT would be for it to produce artwork which was: 1. indistinguishable from one produced by a human being; and/or 2. was seen as having as much aesthetic value as one produced by a human being.” [11, p. 409]

Boden describes several systems which produce art or music, which she considers to be either non-interactive or unpredictably interactive (such as a piece of art which responds to audience members or participants in ways they do not understand). She discusses comparisons with both mediocre human art, in this case pastiches of given styles (perhaps comparable to work by an art student exploring a given style), as well as examples which match world-class human art, of interest as an artwork in itself (comparable to work done by a practising artist). She argues that the following systems all pass (her version of) the TT:

  • Richard Brown’s Starfish – a computer-generated starfish which appeared to be trapped inside a glass table, and which interacted with audience members by responding to their movements and sounds. This featured in the Millennium Dome;
  • AARON, a software program written by the artist Harold Cohen that creates original artistic images which are exhibited in art galleries around the world (described by McCorduck in [12]);
  • Computer art by Boden and Edmonds [13] which was exhibited in honour of world-famous artists.
This was composed of vertical stripes of colour which were continually changing, where the colours were partially determined by audience participation in an unpredictable manner, with constraints on certain colour combinations;
  • Cope’s system Emmy (Experiments in Musical Intelligence) [14, 15], which generated music in particular styles, such as that of Mozart, which was indistinguishable from human-composed Mozart pastiches, and was performed in concert halls.

Footnote 5: For further details of Brown’s Starfish, see http://www.mimetics.com/vur/mindzone.html.

Boden argues that these systems satisfy the second criterion: their aesthetic value has been proven by the degree of interest in their work (presumably, from members of the public, artists and musicians, rather than solely AI researchers). These all model exploratory creativity, where a style is explored. For examples of transformational creativity, Boden refers to systems by Todd and Latham [16] and Sims [17]. However, since these are much more interactive, she does not (yet) consider them to be candidates for the TT. Regarding the first criterion, Boden mentions anecdotally some occasions on which critics have admired a piece of art and then retracted the view when the art was discovered to be machine-generated. This suggests that, in some cases at least, systems have satisfied her first criterion.

We have a number of objections to Boden’s usage of the term ‘Turing Test’ for the above evaluation criteria. Firstly, Boden reinterprets the TT and presents her own version, which differs substantially from Turing’s proposal in at least two ways: (i) there is no interaction with the system, and (ii) by using a disjunctive rather than a conjunctive relationship between the two criteria, she allows that any system which produces output with “as much aesthetic value as one produced by a human being” passes the TT. Systems which produce output of sufficient interest to be exhibited are therefore evaluated as having passed the TT. In particular, Boden argues that “If being exhibited alongside Rothko, in a ‘diamond jubilee’ celebration of these famous artists, does not count as passing the Turing Test, then I do not know what would.” [11, p. 410]. This lack of emphasis either on interaction or on discrimination between human- and computer-produced artefacts seems to be rather missing the point of the TT. In particular, Boden seems to have expanded the term ‘Turing Test’ from being just one way of testing that intelligence might have been exhibited, to being a way of testing whether software has done something (or produced something) culturally significant.

Our second objection is that the evidence for the first criterion, which is closest to the TT, is never explicitly addressed, only implicitly and in an anecdotal fashion. In fact, we see Boden’s argument as supporting the idea that computer-created art may very well be distinguishable from human-created art, yet still have great aesthetic and cultural value (see §3.1 for further argument on this point); that is, that the TT is inappropriate in this context. Clearly, art-generation software could fail the originally conceived Turing Test, yet pass Boden’s version of it. Despite our objections to this misleading naming based on the Turing Test, Boden’s criteria can certainly be valuable for evaluating creative systems. However, we would caution that software which exhibits very little behaviour that would normally be considered (in computing or human circles) as creative can be evaluated positively using Boden’s criteria.
In particular, Brown’s Starfish project, while a beautiful demonstration of neural net technology and an exciting piece of human-computer interaction, certainly cannot be described as an example of software acting creatively. It is an example of kinetic art which was conceived, designed, produced, programmed and evaluated by humans (Richard Brown, Jonathan Mackenzie and Gavin Baily). While the software is generative, and to some extent unpredictable, it exhibits no higher-level cognitive functioning such as the generation and/or application of aesthetic considerations, nor any behaviour which might be deemed remotely imaginative. While Boden’s criteria for the assessment of art-generating software are valid, we argue that calling them a Turing Test confuses the assessment of intelligence and creativity with the assessment of cultural impact, and that software which wouldn’t ordinarily be considered creative can pass the test; hence the criteria have limited value for the assessment of software developed in a Computational Creativity context.

2.3 Using human creative behaviour as a guide for evaluating Computational Creativity

Wiggins proposes the following working definition of Computational Creativity: “The performance of tasks [by a computer] which, if performed by a human, would be deemed creative.” [18, p. 451] This type of behavioural test, in which output from a computer is compared to that from humans, has much in common with the Turing Test. In addition, Colton [19] has argued that creativity in software is often marked negatively, i.e., while there may be no obvious set of behaviours that software must exhibit in order to be regarded as creative, there are some common ways in which software can be immediately disregarded as being uncreative. In particular, Colton proposes that the criticisms levelled at software can largely be grouped into three categories: the software doesn’t exhibit enough (or the right kind of) skill; the software has no appreciation of what it is doing, what it produces or what other people/machines do; the software exhibits no imagination in its processing. Hence, he suggests that Computational Creativity researchers should aim to build software which exhibits behaviour that might be deemed skilful, appreciative and imaginative.

2.4 The importance of interaction

All of the versions of the TT which we have discussed here have one obvious similarity: there is no interaction with the program. This leaves out what is, arguably, the main strength of the TT. We have already introduced Hofstadter’s argument that interaction with a system at an arbitrarily deep level can shed great insight into the processes it uses to generate its output (see §2.1). Hofstadter goes on to say: “In the spirit of much of the best science of our century, the Turing Test blurs the supposedly sharp line between probing of behavior and probing of mechanisms, as well as the supposedly sharp line between “direct” and “indirect” observation, and thus reminds us of the artificiality of such distinctions. Any computer model of mind that passes a truly deep Turing Test – one that probes for the fundamental mechanisms of thought – will agree with “brain structures” all the way down to the level where the essence of thinking really takes place.” [10, pp. 490–491] The key word here is ‘probe’: interaction must form a necessary part of any test based on the TT, for it to hold any relevance to CC.
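Before the example requests below, it may help to see one concrete shape such an interactive probe could take. The following sketch is entirely our own invention (the request categories, class names and `respond` hook are illustrative, not part of any published test); it models an interrogation as a transcript of typed requests with arbitrarily nested follow-ups, in the spirit of Hofstadter’s ‘probing’.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RequestKind(Enum):
    PASTICHE = auto()         # "Draw something in the style of Picasso"
    TRANSFORM_STYLE = auto()  # "Break the rules of Impressionism ..."
    EXPRESS_FEELING = auto()  # "Draw something warm"
    SELF_ASSESS = auto()      # "Show me your best painting and say why"
    CONTEXTUALISE = auto()    # "How does your work fit into the community?"

@dataclass
class Exchange:
    kind: RequestKind
    request: str
    response: str  # an artwork identifier and/or an explanation
    follow_ups: list["Exchange"] = field(default_factory=list)  # deeper probes

@dataclass
class ProbingSession:
    """A transcript of interrogator requests; judges later try to decide
    whether the responder was a human artist or an art system."""
    exchanges: list[Exchange] = field(default_factory=list)

    def probe(self, kind: RequestKind, request: str, respond) -> Exchange:
        # `respond` is whatever callable stands in for the candidate,
        # e.g. session.probe(RequestKind.PASTICHE,
        #                    "Draw something in the style of Picasso",
        #                    art_system.respond)   # art_system is hypothetical
        exchange = Exchange(kind, request, respond(request))
        self.exchanges.append(exchange)
        return exchange
```

The essential property is the `follow_ups` field: a judge can keep descending into why a response was produced, which a single-shot comparison of outputs cannot support.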
For example, a Turing Test for artistic creativity which consisted of requests to draw something specific might be informative. A human interrogator might attempt to distinguish between a computer art system and a human artist by making requests such as:

  • Draw something in the style of Picasso.
  • Can you break/change/enhance the rules of the Impressionist style and draw something within the new style you’ve just created?
  • Draw something which reflects your feelings towards the war in Afghanistan.
  • Draw something warm.
  • Show me your best painting and explain to me why you think it’s good.
  • Who or what has influenced your work?
  • How does your work fit into the wider artistic community?

In order to avoid pitfalls of the current TT and to focus on the important issues, the test could be conducted without the need for natural language, timing issues, and so on.

Footnote 6: These requests could be translated into a language which the program understands, without cheating, thus bypassing the need for verbal interaction.

3 Arguments that the Turing Test is inappropriate for measuring creativity in computers (or humans)

In this section, we argue that the Turing Test is largely inappropriate in the context of CC. Attempts to pass the Turing Test may result in losing differing, and valuable, styles of creativity (§3.1); might fail to take into account the importance of background and contextual information for a creative act (§3.2); encourage superficial, uninteresting advances in front-ends (§3.3); and result in rewarding creativity which adheres to a certain style over that which creates something genuinely novel (§3.4). We suggest that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently impractical (§3.5).

3.1 The Turing Test penalises different styles of creativity

Creativity is a cultural notion, and people around the world understand, study and assess human creativity in many different ways, as detailed in [20]. There are also many different categories of creative humans: for instance, people with cognitive disorders such as autism, people with mental health problems, different nationalities and tribes, different genders, and what the mathematician Alexander Borovik calls “that forgotten tribe of humanity, children” (personal communication). We can often distinguish creative work performed by one of these groups: developmental psychologists can determine the approximate age of a creator during childhood, people can often determine the gender or nationality of an author, and so on. We do not discriminate against any of these categories purely because they are identifiable; rather, we relish their differences. A writer with autism might tend to write more literally than one without, who might employ devices such as metaphor and imagery in their work. An artist with synaesthesia who can taste colour may well use colour differently to an asynaesthete. A poet under the influence of drugs might have different sorts of insights than when they were sober. A Chinese percussionist will compose music which is different to that of an African drummer. We can extend this to include animal creativity: the (plain-looking) male Vogelkop Bowerbird will decorate the lawn in front of its bower in order to attract female Bowerbirds – we could doubtless distinguish a lawn which has been decorated by a human from one decorated by a Bowerbird [21] (who, for instance, has been known to consider litter such as Snickers wrappers to be highly decorative).
In all of these, and countless more examples, it would be absurd to suggest that a member of one group is less creative than a member of another simply on the grounds that we can distinguish which category they fall into. From here it is a natural step to argue that we should not discriminate against computers, even if their brand of creativity turns out to be distinguishable from human creativity (clearly, this argument depends on one’s motivation for studying CC). Negrotti [23] suggests that instead of continuing to judge the computer’s capabilities directly against those of the human mind, the potential of the computer as an ‘alternative intelligence’ can be explored. Re-conceiving the nature of our interaction with the computer leads to a less impoverished appreciation of the human-computer as a creative assemblage. Just as it may be productive to think of the A in AI as standing for a respectable “alternative”, rather than the rather derogatory “artificial”, it may be productive in CC to aim to build systems which are creative in ways which are unique to machines. Humans and machines have different strengths, and rather than attempting to shoe-horn machines into a way of thinking which can be passed off as human, we should aim to develop computational systems which make the most of their strengths. It is simply carbon fascism to argue that only biological creativity is worth studying. Bedworth and Norwood [24] argue along such lines: instead of perceiving AI as recreating humans, they suggest that we should develop intelligent devices whose complexity could be used to complement human ability. Such devices would differ from the human mind in terms of nature and power, but be compatible with it. The TT forces us into the undesirable position, to paraphrase Hofstadter, of trying to make a machine act like it is not a machine.

Footnote 8: In psychology, inter-group comparisons have focussed on whether one group is more creative than another. For instance, work in developmental psychology such as [22] suggests that familiarity with a domain can be necessary for the flexibility required for creativity (Boden also subscribes to this view in her metaphor of exploration and transformation of conceptual spaces). Possible links between madness and creativity have also been much explored, with proponents on either side (see [5]).

Footnote 9: The original quote is “... sometimes I think that all of AI has something of this playful, spoofing character. It is, after all, a delightful game to try and make a machine act like not a machine.” [25, p. 475]

3.2 The Turing Test cannot take framing information into account

The context in which an idea or artefact has been created can affect how creative we judge the originator to be, and the value we ascribe to the idea or artefact. For example, an idea may be considered interesting if produced by a child or novice, yet dull if produced by an adult or expert; similarly, the child/novice may be seen as more creative than the adult/expert. That is, the very thing that we are supposed to determine in a TT (who is responsible for a certain piece of work) is necessary information in the judgement of creativity. For that reason interaction is key, so the versions of the TT above which omit it make the evaluation impossible. For instance, in the poetry magazine Anon, in which reviewers use the double-blind review process to decide whether to accept or reject a poem, Askew [26] considers the difficulties of reviewing poetry without knowledge of the author. As an example, she cites a poem on childbirth, arguing that if it was written by a mother she would consider it rather mediocre, but if written by a man then she would consider it to be insightful and thoughtful. There is much work on the advantages and disadvantages of blind peer review (for example [27]): while there are sometimes good arguments for double-blind review, it is widely acknowledged to be difficult to fully evaluate a paper without the framing information of authorship and context.

3.3 The Turing Test rewards ‘window dressing’ and trickery

Many of the objections to using the TT to evaluate progress in AI carry over to CC. We shall not discuss most of them here; the most apt to creativity is a remark made by Lady Lovelace in her memoir on Babbage’s Analytical Engine: “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.” Turing considers this objection in [1]; both his response and Lady Lovelace’s objection are explored by Boden [3] and by Bringsjord, Bello and Ferrucci [28], and we do not expand on them here. Hofstadter [10] addresses the issue we raised in §1 about encouraging developers of programs to focus on the wrong thing. He argues that in order to avoid the “race for flashier and flashier natural-language ‘front ends’ with little substance behind them”, the person in the interrogator role must ask questions at the right sort of level, which will be difficult to achieve, and comments that “What is needed is a prize for advances in basic research, not a prize for window-dressing.” [25, p. 491]. Techniques such as using random numbers to create what Hofstadter calls an “Artificial Wiggliness”, in order to more closely resemble a hand-drawn figure, could be seen in some situations as the equivalent in art programs of “flashy natural-language front ends”. This is a technique used in the letterform-processing program MetaFont [29], as well as in AARON, and is hypothesised by Hofstadter to be key in our willingness to attribute artistic insight to AARON, despite its being a simple surface technique, of no real interest to CC researchers. Bringsjord et al. [28] argue that those in AI who do use the TT as a motivating goal know that they are competing in trickery: they are building programs which can fool a judge into believing that they are intelligent, rather than actually being intelligent. Thus, their goal is to create an agent which has a Chinese Room Argument-style rulebook comprehensive enough to be able to convince a judge: “In such scenarios it’s really the human creators against the human judges; the intervening computation is in many ways simply along for the ride” [28, p. 2].

3.4 The Turing Test encourages pastiche

In §1 we argued that the motivation of the CC researcher will affect which evaluation criteria are appropriate. The problems with the TT and Computational Creativity are present, to different degrees, in different types of creativity, such as Boden’s exploratory and transformational creativity, and in other distinctions between everyday creativity and truly great creativity. In some circumstances it may be appropriate for exploratory search to drive creative acts, but in others this leads only to pastiche. As a particular example, while Photoshop image filters can produce images which look remarkably Impressionistic, it is very difficult to ascribe creativity to such processes as they do not innovate in either process or aesthetic evaluation.
Given the value of such processes for graphic designers, etc., there is a danger that CC researchers will aim to write such pastiche-generation software, missing the point of innovation and imagination in the creative process, and holding back the study of creativity in software, whatever the motivation of the CC researcher.

3.5 The Turing Test is simply too hard

We have seen that Boden argues that some systems have already passed her version of the TT. Similarly, Hofstadter argues that AARON’s creations could “almost certainly be passed off as human art”, and that they “look surprisingly like products of a sophisticated human artist” [10, p. 468]. Thus, if we base a version of the TT on an inability to distinguish between human- and computer-produced ideas, it appears that some systems may pass this test. However, in §2.4 we argued that tests based on the TT should include some form of interaction, and we suggested the sort of lines along which a TT for artistic creativity might proceed. None of the systems discussed so far (nor any other in existence today) is anywhere close to passing this sort of test. Thus, even if the TT may at some point be a useful test of CC, it is not currently viable. While it may be useful to have a difficult (possibly unattainable) goal as an overall motivation, in practice CC needs pragmatic ways of measuring intermediate progress, which will enable us to objectively and falsifiably claim that program P1 is more creative in ways X, Y and Z than program P2 (where P1 and P2 may be different versions of the same program). Boden [3] suggests that it is more helpful to ask ‘where does x lie in creativity space?’ (assuming a continuous n-dimensional space for n criteria, where we can measure each dimension) than ‘is x creative?’ (assuming a Boolean judgement), or even ‘how creative is x?’ (assuming a linear judgement). Turing-style tests do not allow for such subtleties. The recommendation of focusing on achievable goals in CC is echoed by
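As a closing illustration of Boden’s ‘creativity space’ question, the sketch below records each program version as a vector of measurable criteria and reports the dimensions along which one version exceeds another, supporting claims of the form ‘P1 is more creative in ways X and Y than P2’. The criteria names and scores are our own inventions for illustration; the paper prescribes no particular set of dimensions.

```python
# A minimal sketch of comparing programs in an n-dimensional creativity
# space, per criterion, rather than issuing a Boolean or single linear
# judgement of creativity.

CRITERIA = ("novelty", "skill", "appreciation", "imagination")  # hypothetical dimensions

def more_creative_in(p1: dict[str, float], p2: dict[str, float]) -> list[str]:
    """Return the criteria along which p1 scores strictly higher than p2."""
    return [c for c in CRITERIA if p1[c] > p2[c]]

# Hypothetical profiles for two versions of the same program.
p1 = {"novelty": 0.7, "skill": 0.5, "appreciation": 0.4, "imagination": 0.6}
p2 = {"novelty": 0.4, "skill": 0.6, "appreciation": 0.4, "imagination": 0.3}

print(more_creative_in(p1, p2))  # ['novelty', 'imagination']
```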
